Geng Yuan

ViThinker: Active Vision-Language Reasoning via Dynamic Perceptual Querying
Feb 02, 2026

Q-realign: Piggybacking Realignment on Quantization for Safe and Efficient LLM Deployment
Jan 13, 2026

From Bits to Chips: An LLM-based Hardware-Aware Quantization Agent for Streamlined Deployment of LLMs
Jan 07, 2026

Rethinking the Potential of Layer Freezing for Efficient DNN Training
Aug 20, 2025

TSLA: A Task-Specific Learning Adaptation for Semantic Segmentation on Autonomous Vehicles Platform
Aug 17, 2025

RCR-Router: Efficient Role-Aware Context Routing for Multi-Agent LLM Systems with Structured Memory
Aug 06, 2025

ACE: Exploring Activation Cosine Similarity and Variance for Accurate and Calibration-Efficient LLM Pruning
May 28, 2025

KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning
May 24, 2025

Structured Agent Distillation for Large Language Model
May 20, 2025

Perturbation-efficient Zeroth-order Optimization for Hardware-friendly On-device Training
Apr 28, 2025